EEG Emotion Recognition
MSGM: A Multi-Scale Spatiotemporal Graph Mamba for EEG Emotion Recognition
Liu, Hanwen, Gong, Yifeng, Yan, Zuwei, Zhuang, Zeheng, Lu, Jiaxuan
EEG-based emotion recognition struggles with capturing multi-scale spatiotemporal dynamics and ensuring computational efficiency for real-time applications. To overcome these challenges, we propose the Multi-Scale Spatiotemporal Graph Mamba (MSGM), a novel framework integrating multi-window temporal segmentation, bimodal spatial graph modeling, and efficient fusion via the Mamba architecture. A multi-depth Graph Convolutional Network (GCN) and token embedding fusion module, paired with Mamba's state-space modeling, enable dynamic spatiotemporal interaction at linear complexity. Emotion recognition has emerged as a critical research frontier with far-reaching implications for human-computer interaction, mental health monitoring, and neuroscientific exploration [1] [2] [3]. The ability to decode emotional states in real time promises to revolutionize intelligent systems by enhancing user adaptability and bolstering clinical applications through early detection and management of emotional disorders [4] [5]. As these capabilities become increasingly vital in healthcare and artificial intelligence, there is an urgent need for robust, efficient, and neurophysiologically grounded approaches to overcome both theoretical complexities and practical deployment challenges [6]. Electroencephalography (EEG) stands out as a premier modality for emotion recognition, owing to its unparalleled capacity to non-invasively record brain activity with high temporal resolution, directly capturing the neural signatures of emotional processes [7]. Hanwen Liu and Yifeng Gong are with the School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, 518107, China, e-mail: (liuhw56, gongyf9)@mail2.sysu.edu.cn. Zuwei Yan is with the College of Communication Engineering, Jilin University, Changchun, 130012, China, e-mail: yanzw2422@mails.jlu.edu.cn.
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Emotion (0.96)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Spatial Reasoning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
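The multi-depth GCN component in MSGM stacks graph convolutions over the EEG channel graph. As an illustration only (the toy adjacency, channel count, and random weights below are assumptions, not MSGM's actual configuration), a symmetric-normalized GCN propagation step can be sketched as:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One GCN step: ReLU(D^-1/2 (A + I) D^-1/2 X W) with self-loops."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy example: 4 EEG channels, 8 input features, 16 hidden features
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
H1 = gcn_layer(X, A, rng.standard_normal((8, 16)))
H2 = gcn_layer(H1, A, rng.standard_normal((16, 16)))  # "multi-depth": stack layers
print(H2.shape)  # (4, 16)
```

Stacking layers at several depths, as MSGM does, lets shallow layers capture local electrode neighborhoods while deeper layers aggregate information across the whole scalp graph.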
Commuting Distance Regularization for Timescale-Dependent Label Inconsistency in EEG Emotion Recognition
Zeng, Xiaocong, Michoski, Craig, Pang, Yan, Kuang, Dongyang
In this work, we address the often-overlooked issue of Timescale-Dependent Label Inconsistency (TsDLI) in training neural network models for EEG-based human emotion recognition. To mitigate TsDLI and enhance model generalization and explainability, we propose two novel regularization strategies: Local Variation Loss (LVL) and Local-Global Consistency Loss (LGCL). Both methods incorporate classical mathematical principles--specifically, functions of bounded variation and commute-time distances--within a graph theoretic framework. Complementing our regularizers, we introduce a suite of new evaluation metrics that better capture the alignment between temporally local predictions and their associated global emotion labels. We validate our approach through comprehensive experiments on two widely used EEG emotion datasets, DREAMER and DEAP, across a range of neural architectures including LSTM and transformer-based models. Performance is assessed using five distinct metrics encompassing both quantitative accuracy and qualitative consistency. Results consistently show that our proposed methods outperform state-of-the-art baselines, delivering superior aggregate performance and offering a principled trade-off between interpretability and predictive power under label inconsistency. Notably, LVL achieves the best aggregate rank across all benchmarked backbones and metrics, while LGCL frequently ranks second, highlighting the effectiveness of our framework.
- North America > United States > Texas > Travis County > Austin (0.14)
- Asia > China > Guangdong Province > Zhuhai (0.04)
- Asia > China > Guangdong Province > Shenzhen (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
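The commute-time distance used by the LGCL regularizer has a well-known closed form via the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal sketch on a toy path graph (not the paper's actual graph construction):

```python
import numpy as np

def commute_time_distances(A):
    """Pairwise commute-time distances from adjacency matrix A:
    c(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the
    pseudoinverse of the graph Laplacian L = D - A."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    Lp = np.linalg.pinv(L)
    vol = d.sum()                       # total edge volume (sum of degrees)
    diag = np.diag(Lp)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)

# Path graph 0-1-2: commuting between the endpoints takes longest
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
C = commute_time_distances(A)
print(C[0, 1], C[0, 2])  # 4.0 8.0
```

On this path graph the result matches the classical identity c(i, j) = 2m * R_eff(i, j): with m = 2 edges and effective resistances 1 and 2, the commute times are 4 and 8.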
Evaluation in EEG Emotion Recognition: State-of-the-Art Review and Unified Framework
Kukhilava, Natia, Tsmindashvili, Tatia, Kalandadze, Rapael, Gupta, Anchit, Katamadze, Sofio, Brémond, François, Ferrari, Laura M., Müller, Philipp, Wirth, Benedikt Emanuel
Electroencephalography-based Emotion Recognition (EEG-ER) has become a growing research area in recent years. Analyzing 216 papers published between 2018 and 2023, we uncover that the field lacks a unified evaluation protocol, which is essential to fairly define the state of the art, compare new approaches, and track the field's progress. We report the main inconsistencies between the used evaluation protocols, which are related to ground truth definition, evaluation metric selection, data splitting types (e.g., subject-dependent or subject-independent) and the use of different datasets. Capitalizing on this state-of-the-art research, we propose a unified evaluation protocol, EEGain (https://github.com/EmotionLab/EEGain), which enables an easy and efficient evaluation of new methods and datasets. EEGain is a novel open source software framework, offering the capability to compare - and thus define - state-of-the-art results. EEGain includes standardized methods for data pre-processing, data splitting, evaluation metrics, and the ability to load the six most relevant datasets (i.e., AMIGOS, DEAP, DREAMER, MAHNOB-HCI, SEED, SEED-IV) in EEG-ER with only a single line of code. In addition, we have assessed and validated EEGain using these six datasets on the four most common publicly available methods (EEGNet, DeepConvNet, ShallowConvNet, TSception). This is a significant step to make research on EEG-ER more reproducible and comparable, thereby accelerating the overall progress of the field.
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.04)
- Europe > France (0.04)
- Asia > China > Guangdong Province > Guangzhou (0.04)
- (9 more...)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.93)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.66)
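One of the protocol inconsistencies the survey highlights is subject-dependent versus subject-independent data splitting. A subject-independent (leave-one-subject-out) split can be sketched generically as below; the subject IDs are illustrative, and this is not EEGain's actual API:

```python
import numpy as np

def subject_independent_folds(subject_ids):
    """Leave-one-subject-out folds: each fold tests on one held-out subject,
    so no subject contributes data to both train and test."""
    for s in np.unique(subject_ids):
        yield np.where(subject_ids != s)[0], np.where(subject_ids == s)[0]

# Toy metadata: 6 trials recorded from 3 subjects
subjects = np.array([1, 1, 2, 2, 3, 3])
folds = list(subject_independent_folds(subjects))
for train, test in folds:
    assert not set(subjects[train]) & set(subjects[test])  # no subject leakage
print(len(folds))  # 3
```

A subject-dependent protocol, by contrast, splits within each subject's trials, which typically yields much higher accuracy; reporting one while implying the other is exactly the kind of inconsistency a unified protocol prevents.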
FACE: Few-shot Adapter with Cross-view Fusion for Cross-subject EEG Emotion Recognition
Liu, Haiqi, Chen, C. L. Philip, Zhang, Tong
Cross-subject EEG emotion recognition is challenged by significant inter-subject variability and intricately entangled intra-subject variability. Existing works have primarily addressed these challenges through domain adaptation or generalization strategies. However, they typically require extensive target subject data or demonstrate limited generalization performance to unseen subjects. Recent few-shot learning paradigms attempt to address these limitations but often encounter catastrophic overfitting during subject-specific adaptation with limited samples. This article introduces the few-shot adapter with a cross-view fusion method called FACE for cross-subject EEG emotion recognition, which leverages dynamic multi-view fusion and effective subject-specific adaptation. Specifically, FACE incorporates a cross-view fusion module that dynamically integrates global brain connectivity with localized patterns via subject-specific fusion weights to provide complementary emotional information. Moreover, the few-shot adapter module is proposed to enable rapid adaptation for unseen subjects while reducing overfitting by enhancing adapter structures with meta-learning. Experimental results on three public EEG emotion recognition benchmarks demonstrate FACE's superior generalization performance over state-of-the-art methods. FACE provides a practical solution for cross-subject scenarios with limited labeled data. Understanding human emotions is fundamental and crucial to advancing fields such as human-computer interaction [1] and mental health [2]. Electroencephalography (EEG) has recently emerged as a remarkable tool for capturing subjects' neural responses to emotional states [3]. EEG-based emotion recognition remains challenging due to the substantial inter-subject variance in brain activity patterns [4], [5].
Additionally, intra-subject variance arises from the non-stationary nature of EEG signals, which exhibit variations in frequency and amplitude over time within the same subject.
[Figure: Comparison of training data and processes between Few-Shot Learning (FSL) and traditional deep learning (DL) in cross-subject EEG emotion recognition.]
- Asia > China > Guangdong Province > Guangzhou (0.05)
- Asia > Macao (0.04)
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.04)
- Overview (1.00)
- Research Report > New Finding (0.93)
- Research Report > Promising Solution (0.66)
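Few-shot adaptation to an unseen subject, as discussed above, is commonly built on episodic prototype classifiers. The sketch below is a generic nearest-prototype baseline with made-up 2-D embeddings and episode sizes, not FACE's actual adapter or meta-learning procedure:

```python
import numpy as np

def prototype_classify(support_X, support_y, query_X):
    """Nearest-prototype few-shot classifier: class prototypes are the mean
    support embeddings; each query takes the label of the closest prototype."""
    classes = np.unique(support_y)
    protos = np.stack([support_X[support_y == c].mean(axis=0) for c in classes])
    d = ((query_X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

# 2-way 2-shot toy episode: a few labeled trials from the new subject
support_X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
support_y = np.array([0, 0, 1, 1])
query_X = np.array([[0.1, 0.2], [5.2, 4.9]])
print(prototype_classify(support_X, support_y, query_X))  # [0 1]
```

Because the only "training" per subject is averaging a handful of support embeddings, this style of adaptation has no subject-specific weights to overfit, which is the failure mode the FACE adapter targets with meta-learned structures.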
A Comprehensive Survey on EEG-Based Emotion Recognition: A Graph-Based Perspective
Liu, Chenyu, Zhou, Xinliang, Wu, Yihao, Ding, Yi, Zhai, Liming, Wang, Kun, Jia, Ziyu, Liu, Yang
Compared to other modalities, electroencephalogram (EEG) based emotion recognition can intuitively respond to emotional patterns in the human brain and, therefore, has become one of the most intensively studied tasks in affective computing. The nature of emotions is a physiological and psychological state change in response to brain region connectivity, making emotion recognition focus more on the dependency between brain regions instead of specific brain regions. A significant trend is the application of graphs to encapsulate such dependency as dynamic functional connections between nodes across temporal and spatial dimensions. Concurrently, the neuroscientific underpinnings behind this dependency endow the application of graphs in this field with a distinctive significance. However, there is neither a comprehensive review nor a tutorial for constructing emotion-relevant graphs in EEG-based emotion recognition. In this paper, we present a comprehensive survey of these studies, delivering a systematic review of graph-related methods in this field from a methodological perspective. We propose a unified framework for graph applications in this field and categorize these methods on this basis. Finally, based on previous studies, we also present several open challenges and future directions in this field.
- Asia > China (0.04)
- North America > United States > Virginia (0.04)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.68)
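A common way to encapsulate inter-region dependency as a graph is to threshold a functional-connectivity measure between channel time series. A minimal sketch using absolute Pearson correlation on toy signals (real pipelines in the surveyed literature also use PLV, coherence, or fully learned adjacencies):

```python
import numpy as np

def functional_adjacency(signals, threshold=0.5):
    """Functional-connectivity graph: absolute Pearson correlation between
    channel time series, thresholded into a binary adjacency matrix."""
    corr = np.abs(np.corrcoef(signals))
    A = (corr >= threshold).astype(float)
    np.fill_diagonal(A, 0.0)  # no self-loops
    return A

# 3 toy channels: ch0 and ch1 carry the same oscillation, ch2 is noise
t = np.linspace(0, 1, 256)
rng = np.random.default_rng(1)
signals = np.stack([np.sin(8 * np.pi * t),
                    np.sin(8 * np.pi * t) + 0.1 * rng.standard_normal(256),
                    rng.standard_normal(256)])
A = functional_adjacency(signals)
print(A[0, 1], A[0, 2])  # 1.0 0.0
```

Computing such an adjacency per time window yields the dynamic functional connections across temporal and spatial dimensions that the survey's unified framework organizes.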
Graph Neural Networks in EEG-based Emotion Recognition: A Survey
Liu, Chenyu, Zhou, Xinliang, Wu, Yihao, Yang, Ruizhi, Zhai, Liming, Jia, Ziyu, Liu, Yang
Compared to other modalities, EEG-based emotion recognition can intuitively respond to the emotional patterns in the human brain and, therefore, has become one of the most intensively studied tasks in the brain-computer interface field. Since dependencies within brain regions are closely related to emotion, a significant trend is to develop Graph Neural Networks (GNNs) for EEG-based emotion recognition. However, brain region dependencies in emotional EEG have physiological bases that distinguish GNNs in this field from those in other time series fields. In addition, there is neither a comprehensive review nor guidance for constructing GNNs in EEG-based emotion recognition. In this survey, our categorization reveals the commonalities and differences of existing approaches under a unified framework of graph construction. We analyze and categorize methods from three stages in the framework to provide clear guidance on constructing GNNs in EEG-based emotion recognition. In addition, we discuss several open challenges and future directions, such as temporal fully-connected graphs and graph condensation.
- Research Report (0.50)
- Overview (0.34)
EEG-based Emotion Style Transfer Network for Cross-dataset Emotion Recognition
Zhou, Yijin, Li, Fu, Li, Yang, Ji, Youshuo, Zhang, Lijian, Chen, Yuanfang, Zheng, Wenming, Shi, Guangming
As the key to realizing affective brain-computer interfaces (aBCIs), EEG emotion recognition has been widely studied by many researchers. Previous methods have performed well for intra-subject EEG emotion recognition. However, the style mismatch between source domain (training data) and target domain (test data) EEG samples caused by huge inter-domain differences is still a critical problem for EEG emotion recognition. To solve the problem of cross-dataset EEG emotion recognition, in this paper, we propose an EEG-based Emotion Style Transfer Network (E2STN) to obtain EEG representations that contain the content information of the source domain and the style information of the target domain, which we call stylized emotional EEG representations. The representations are helpful for cross-dataset discriminative prediction. Concretely, E2STN consists of three modules, i.e., a transfer module, a transfer evaluation module, and a discriminative prediction module. The transfer module encodes the domain-specific information of source and target domains and then re-constructs the source domain's emotional pattern and the target domain's statistical characteristics into the new stylized EEG representations. In this process, the transfer evaluation module is adopted to constrain the generated representations so that they more precisely fuse the two kinds of complementary information from source and target domains while avoiding distortion. Finally, the generated stylized EEG representations are fed into the discriminative prediction module for final classification. Extensive experiments show that E2STN achieves state-of-the-art performance on cross-dataset EEG emotion recognition tasks.
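The idea of recombining source "content" with target "style" can be illustrated, far more crudely than E2STN's learned transfer module, by matching first- and second-order feature statistics across domains (the feature dimensions and distributions below are made up):

```python
import numpy as np

def match_statistics(source, target):
    """Re-style source features with the target domain's channel-wise
    mean/std while keeping the source 'content' (its z-scores)."""
    mu_s, sd_s = source.mean(axis=0), source.std(axis=0)
    mu_t, sd_t = target.mean(axis=0), target.std(axis=0)
    return (source - mu_s) / (sd_s + 1e-8) * sd_t + mu_t

rng = np.random.default_rng(0)
source = rng.standard_normal((100, 4)) * 2.0 + 1.0   # source-domain features
target = rng.standard_normal((100, 4)) * 0.5 - 3.0   # target-domain features
stylized = match_statistics(source, target)
print(np.allclose(stylized.mean(axis=0), target.mean(axis=0)))  # True
```

The stylized features inherit the target domain's statistics exactly while preserving the relative structure of the source samples; E2STN's transfer and transfer-evaluation modules learn a far richer version of this trade-off.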
Machine Learning Strategies to Improve Generalization in EEG-based Emotion Assessment: A Systematic Review
Apicella, Andrea, Arpaia, Pasquale, D'Errico, Giovanni, Marocco, Davide, Mastrati, Giovanna, Moccaldi, Nicola, Prevete, Roberto
Emotions are our internal compass and play a primary role in learning, reasoning, decision-making processes, and communication between individuals. The Information and Communication Technology (ICT) sector's interest in emotions has grown tremendously in recent years, shaping the concept of affective computing, an emerging field aimed at monitoring and predicting emotions in order to improve human-computer interaction (Cambria et al., 2017). For instance, the introduction of affective loops makes it possible to implement increasingly adaptive human-machine interfaces and virtual assistants tailored to users (Saganowski et al., 2020), and, in the healthcare context, the outputs of emotion monitoring systems can be useful in the treatment of psychological disorders based on emotional deficits, in autism (Feng et al., 2018), in the improvement of wellbeing (Healy et al., 2018), and in stress containment (Saganowski, 2022). In this context, there is growing interest in the literature in Brain-Computer Interface (BCI) systems based on EEG signals (Torres et al., 2020). In fact, the number of annual scientific publications indexed in the Scopus database on the topic of EEG-based emotion recognition shows an exponential growth trend (see Figure 1). A critical issue underlying the processing and classification of EEG signals is their inherent variability among different subjects or different acquisition times (i.e.
- North America > United States > New York > New York County > New York City (0.04)
- Europe > United Kingdom > England > Staffordshire > Keele (0.04)
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- (2 more...)
- Research Report (1.00)
- Overview (1.00)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Health Care Technology (1.00)
- Health & Medicine > Diagnostic Medicine (0.92)
EEG-based Emotion Recognition Using Multiple Kernel Learning - Machine Intelligence Research
Emotion recognition based on electroencephalography (EEG) has a wide range of applications and great potential value, and has therefore received increasing attention from academia and industry in recent years. Meanwhile, multiple kernel learning (MKL) has also been favored by researchers for its data-driven convenience and high accuracy. However, there is little research on MKL in EEG-based emotion recognition. This paper is therefore dedicated to exploring MKL methods in EEG emotion recognition and promoting their adoption in this field. To this end, we propose a support vector machine (SVM) classifier based on the MKL algorithm EasyMKL to investigate the feasibility of MKL algorithms for EEG-based emotion recognition.
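The simplest MKL baseline combines several base kernels with fixed weights before classification; EasyMKL additionally learns the combination weights, which this sketch omits. Toy data, uniform weights, and a kernel ridge classifier standing in for the SVM are all assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy two-class data, well separated in 3-D feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((20, 3)), rng.standard_normal((20, 3)) + 3.0])
y = np.array([-1.0] * 20 + [1.0] * 20)

# Uniform two-kernel combination (EasyMKL would learn these weights instead)
K = 0.5 * rbf_kernel(X, X, 0.1) + 0.5 * rbf_kernel(X, X, 1.0)

# Kernel ridge classifier as a lightweight stand-in for the SVM
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
pred = np.sign(K @ alpha)
print((pred == y).mean())  # training accuracy
```

In an EEG setting, each base kernel would typically be built from a different feature view (e.g., one per frequency band), so the learned weights indicate which views matter for emotion discrimination.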
Progressive Graph Convolution Network for EEG Emotion Recognition
Zhou, Yijin, Li, Fu, Li, Yang, Ji, Youshuo, Shi, Guangming, Zheng, Wenming, Zhang, Lijian, Chen, Yuanfang, Cheng, Rui
Studies in the area of neuroscience have revealed the relationship between emotional patterns and brain functional regions, demonstrating that dynamic relationships between different brain regions are an essential factor affecting emotion recognition determined through electroencephalography (EEG). Moreover, in EEG emotion recognition, we can observe that clearer boundaries exist between coarse-grained emotions than those between fine-grained emotions, based on the same EEG data; this indicates the concurrence of large coarse- and small fine-grained emotion variations. Thus, the progressive classification process from coarse- to fine-grained categories may be helpful for EEG emotion recognition. Consequently, in this study, we propose a progressive graph convolution network (PGCN) for capturing this inherent characteristic in EEG emotional signals and progressively learning the discriminative EEG features. To fit different EEG patterns, we constructed a dual-graph module to characterize the intrinsic relationship between different EEG channels, containing the dynamic functional connections and static spatial proximity information of brain regions from neuroscience research. Moreover, motivated by the observation of the relationship between coarse- and fine-grained emotions, we adopt a dual-head module that enables the PGCN to progressively learn more discriminative EEG features, from coarse-grained (easy) to fine-grained categories (difficult), referring to the hierarchical characteristic of emotion. To verify the performance of our model, extensive experiments were conducted on two public datasets: SEED-IV and multi-modal physiological emotion database (MPED).
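The coarse-to-fine idea behind PGCN can be illustrated as a progressive decision: commit to a coarse category first, then discriminate only among its fine-grained children. The two-level hierarchy and nearest-prototype heads below are made up for illustration, not PGCN's actual dual-head module:

```python
import numpy as np

# Hypothetical two-level emotion hierarchy (coarse valence -> fine category)
COARSE = {"joy": "positive", "calm": "positive", "anger": "negative", "sad": "negative"}

def progressive_predict(x, coarse_protos, fine_protos):
    """Coarse-to-fine prediction: pick the coarse valence first, then the
    nearest fine-grained prototype within that coarse branch only."""
    c = min(coarse_protos, key=lambda k: np.linalg.norm(x - coarse_protos[k]))
    fines = {f: p for f, p in fine_protos.items() if COARSE[f] == c}
    return c, min(fines, key=lambda f: np.linalg.norm(x - fines[f]))

fine_protos = {"joy": np.array([1.0, 1.0]), "calm": np.array([1.0, -1.0]),
               "anger": np.array([-1.0, 1.0]), "sad": np.array([-1.0, -1.0])}
coarse_protos = {"positive": np.array([1.0, 0.0]), "negative": np.array([-1.0, 0.0])}
print(progressive_predict(np.array([0.9, 0.8]), coarse_protos, fine_protos))  # ('positive', 'joy')
```

Restricting the fine-grained head to one coarse branch mirrors the observation in the abstract: coarse boundaries are clearer, so settling them first simplifies the harder fine-grained decision.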